Language tasks involving character-level manipulations (e.g., spelling correction, many word games) are challenging for models based on subword tokenization. To address this, we adapt the interchange intervention training method of Geiger et al. (2021) to operate on type-level variables over characters. This allows us to encode robust, position-independent character-level information in the internal representations of subword-based models. We additionally introduce a suite of character-level tasks that systematically vary in their dependence on meaning and sequence-level context. While simple character-level tokenization approaches still perform best on purely form-based tasks like string reversal, our method is superior for more complex tasks that blend form, meaning, and context, such as spelling correction in context and word search games. Our approach also leads to subword-based models with human-interpretable internal representations of characters.
Translated by Google Translate
Explainability methods for NLP systems encounter a version of the fundamental problem of causal inference: for a given ground-truth input text, we never truly observe the counterfactual texts necessary for isolating the causal effects of model representations on outputs. In response, many explainability methods make no use of counterfactual texts, assuming they will be unavailable. In this paper, we show that robust causal explainability methods can be created using approximate counterfactuals, which can be written by humans to approximate specific counterfactuals or simply sampled using metadata-guided heuristics. The core of our proposal is the Causal Proxy Model (CPM). A CPM explains a black-box model $\mathcal{N}$ because it is trained to have the same actual input/output behavior as $\mathcal{N}$, while creating neural representations that can be intervened upon to simulate the counterfactual input/output behavior of $\mathcal{N}$. Furthermore, we show that the best CPM for $\mathcal{N}$ performs comparably to $\mathcal{N}$ when making factual predictions, which means that the CPM can simply replace $\mathcal{N}$, leading to more explainable deployed models. Our code is available at https://github.com/frankaging/causal-proxy-model.
Humans have the remarkable ability to recognize and acquire novel visual concepts in a zero-shot manner. Given a high-level, symbolic description of a novel concept in terms of previously learned visual concepts and their relations, humans can recognize the novel concept without seeing any examples. Moreover, they can acquire new concepts by parsing and communicating symbolic structures built from learned visual concepts and relations. Endowing machines with these capabilities is crucial for improving their generalization ability at inference time. In this work, we introduce Zero-shot Concept Recognition and Acquisition (ZeroC), a neuro-symbolic architecture that can recognize and acquire novel concepts in a zero-shot way. ZeroC represents concepts as graphs of constituent concept models (as nodes) and their relations (as edges). To allow inference-time composition, we use energy-based models (EBMs) to model concepts and relations. We design the ZeroC architecture so that it allows a one-to-one mapping between a concept's symbolic graph structure and its corresponding EBM, which for the first time allows acquiring new concepts, communicating their graph structure, and applying them to classification and detection tasks (even across domains) at inference time. We introduce algorithms for learning and inference with ZeroC. We evaluate ZeroC on a challenging grid-world dataset designed to probe zero-shot concept recognition and acquisition, and demonstrate its capabilities.
Distillation efforts have led to language models that are more compact without severe drops in performance. The standard approach to distillation trains a student model against two objectives: a task-specific objective (e.g., language modeling) and an imitation objective that encourages the hidden states of the student model to be similar to those of the larger teacher model. In this paper, we show that it is beneficial to augment distillation with a third objective that encourages the student to imitate the causal computation process of the teacher through interchange intervention training (IIT). IIT pushes the student model to become a causal abstraction of the teacher model - a simpler model with the same causal structure. IIT is fully differentiable, easily implemented, and combines flexibly with other objectives. Compared with standard distillation of BERT, distillation via IIT results in lower perplexity on Wikipedia (masked language modeling) and marked improvements on the GLUE benchmark (natural language understanding), SQuAD (question answering), and CoNLL-2003 (named entity recognition).
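The three training signals described above can be written as one combined objective. The sketch below uses plain MSE stand-ins for all three terms (the actual work uses task-specific losses such as cross-entropy and KL divergence; the function name and weights are illustrative):

```python
import numpy as np

def causal_distillation_loss(s_logits, t_logits,   # task / logit matching
                             s_hidden, t_hidden,   # hidden-state imitation
                             s_cf, t_cf,           # student/teacher outputs under
                                                   # matched interchange interventions
                             weights=(1.0, 1.0, 1.0)):
    """Combined distillation objective (sketch): task term + hidden-state
    imitation term + IIT counterfactual-matching term, all as MSE stand-ins."""
    w_task, w_hid, w_iit = weights
    task = np.mean((s_logits - t_logits) ** 2)
    hid = np.mean((s_hidden - t_hidden) ** 2)
    iit = np.mean((s_cf - t_cf) ** 2)
    return w_task * task + w_hid * hid + w_iit * iit
```

The IIT term is what distinguishes this from standard distillation: it compares student and teacher outputs after the same interchange intervention has been applied to both models' aligned representations.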
In many areas, we have well-founded insights about causal structure that it would be useful to bring to our trained models while still allowing them to learn in a data-driven fashion. To achieve this, we present the new method of interchange intervention training (IIT). In IIT, we (1) align variables in a causal model with representations in a neural model and (2) train the neural model to match the counterfactual behavior of the causal model on a base input when the aligned representations in both models are set to the values they would have for a second, source input. IIT is fully differentiable, flexibly combines with other objectives, and guarantees that the target causal model is a causal abstraction of the neural model when its loss is minimized. We evaluate IIT on a structured vision task (MNIST-PVR) and a navigational instruction task (ReaSCAN). We compare IIT against multi-task training objectives and data augmentation. In all our experiments, IIT achieves the best results and produces neural models that are more interpretable in the sense that they realize the target causal model.
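The core interchange operation can be illustrated with a toy two-layer network: run the model on a source input, cache the hidden slice aligned with a causal variable, then re-run on the base input with that slice swapped in. A minimal NumPy sketch (the network, the slice, and the inputs are all illustrative, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # input -> hidden
W2 = rng.normal(size=(8, 2))   # hidden -> output

def forward(x, swap=None, source_hidden=None):
    """Toy two-layer network; optionally interchange a slice of the
    hidden layer with its value on a source input (the intervention)."""
    h = np.tanh(x @ W1)
    if swap is not None:
        h = h.copy()
        h[swap] = source_hidden[swap]
    return h @ W2

base, source = rng.normal(size=4), rng.normal(size=4)
h_src = np.tanh(source @ W1)                     # cache the source run
y_cf = forward(base, swap=slice(0, 4), source_hidden=h_src)
# In IIT, y_cf would be trained to match the causal model's counterfactual
# output under the corresponding intervention on the aligned variable.
```

Swapping the entire hidden state reduces to simply running the source input, which is a useful sanity check when implementing the intervention.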
A recent study has shown a phenomenon called neural collapse, in which the within-class means of features and the classifier weight vectors converge to the vertices of a simplex equiangular tight frame at the terminal phase of training for classification. In this paper, we explore the corresponding structures of the last-layer feature centers and classifiers in semantic segmentation. Based on our empirical and theoretical analysis, we point out that semantic segmentation naturally brings contextual correlation and imbalanced distribution among classes, which breaks the equiangular and maximally separated structure of neural collapse for both feature centers and classifiers. However, such a symmetric structure is beneficial to discrimination of the minor classes. To preserve these advantages, we introduce a regularizer on feature centers to encourage the network to learn features closer to the appealing structure in imbalanced semantic segmentation. Experimental results show that our method can bring significant improvements on both 2D and 3D semantic segmentation benchmarks. Moreover, our method ranks 1st and sets a new record (+6.8% mIoU) on the ScanNet200 test leaderboard. Code will be available at https://github.com/dvlab-research/Imbalanced-Learning.
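The simplex equiangular tight frame (ETF) referred to above has a standard closed form: $K$ unit-norm class vectors whose pairwise inner products all equal $-1/(K-1)$, the maximally separated equiangular configuration. A small NumPy sketch of the construction (the dimensions and seed are illustrative):

```python
import numpy as np

def simplex_etf(K, d, seed=0):
    """K-column simplex ETF in d dimensions (requires d >= K): unit-norm
    columns with every pairwise inner product equal to -1/(K-1)."""
    rng = np.random.default_rng(seed)
    U = np.linalg.qr(rng.normal(size=(d, K)))[0]   # orthonormal columns
    P = np.eye(K) - np.ones((K, K)) / K            # centering projector
    return np.sqrt(K / (K - 1)) * U @ P            # columns are unit norm by construction
```

A regularizer like the one described can then penalize the distance between the (normalized) feature-center Gram matrix and this ideal Gram matrix.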
Weakly-supervised object localization aims to indicate the category as well as the scope of an object in an image given only image-level labels. Most existing works are based on Class Activation Mapping (CAM) and endeavor to enlarge the discriminative area inside the activation map to perceive the whole object, yet ignore the co-occurrence confounder of object and context (e.g., fish and water), which makes it hard for the model to distinguish object boundaries. Besides, the use of CAM also brings a dilemma: classification and localization always suffer from a performance gap and cannot reach their highest accuracy simultaneously. In this paper, we propose a causal knowledge distillation method, dubbed KD-CI-CAM, to address these two under-explored issues in one go. More specifically, we tackle the co-occurrence context confounder problem via causal intervention (CI), which explores the causalities among image features, contexts, and categories to eliminate the biased object-context entanglement in the class activation maps. Based on the de-biased object feature, we additionally propose a multi-teacher causal distillation framework to balance the absorption of classification knowledge and localization knowledge during model training. Extensive experiments on several benchmarks demonstrate the effectiveness of KD-CI-CAM in learning clear object boundaries from confounding contexts and addressing the dilemma between classification and localization performance.
Witnessing the impressive achievements of pre-training techniques on large-scale data in the fields of computer vision and natural language processing, we wonder whether this idea could be adapted in a grab-and-go spirit to mitigate the sample inefficiency problem for visuomotor driving. Given the highly dynamic and variant nature of the input, the visuomotor driving task inherently lacks view and translation invariance, and the visual input contains massive irrelevant information for decision making, rendering predominant pre-training approaches from general vision less suitable for the autonomous driving task. To this end, we propose PPGeo (Policy Pre-training via Geometric modeling), an intuitive and straightforward fully self-supervised framework curated for policy pre-training in visuomotor driving. We aim at learning policy representations as a powerful abstraction by modeling 3D geometric scenes on large-scale unlabeled and uncalibrated YouTube driving videos. The proposed PPGeo is performed in two stages to support effective self-supervised training. In the first stage, the geometric modeling framework generates pose and depth predictions simultaneously, with two consecutive frames as input. In the second stage, the visual encoder learns driving policy representation by predicting the future ego-motion and optimizing the photometric error based on the current visual observation only. As such, the pre-trained visual encoder is equipped with rich driving-policy-related representations and is thereby competent for multiple visuomotor driving tasks. Extensive experiments covering a wide span of challenging scenarios have demonstrated the superiority of our proposed approach, where improvements range from 2% to over 100% with very limited data. Code and models will be available at https://github.com/OpenDriveLab/PPGeo.
In this work, we focus on instance-level open vocabulary segmentation, intending to expand a segmenter for instance-wise novel categories without mask annotations. We investigate a simple yet effective framework with the help of image captions, focusing on exploiting thousands of object nouns in captions to discover instances of novel classes. Rather than adopting pretrained caption models or using massive caption datasets with complex pipelines, we propose an end-to-end solution from two aspects: caption grounding and caption generation. In particular, we devise a joint Caption Grounding and Generation (CGG) framework based on a Mask Transformer baseline. The framework has a novel grounding loss that performs explicit and implicit multi-modal feature alignments. We further design a lightweight caption generation head to allow for additional caption supervision. We find that grounding and generation complement each other, significantly enhancing the segmentation performance for novel categories. We conduct extensive experiments on the COCO dataset with two settings: Open Vocabulary Instance Segmentation (OVIS) and Open Set Panoptic Segmentation (OSPS). The results demonstrate the superiority of our CGG framework over previous OVIS methods, achieving a large improvement of 6.8% mAP on novel classes without extra caption data. Our method also achieves over 15% PQ improvements for novel classes on the OSPS benchmark under various settings.
Nearest-Neighbor (NN) classification has been proven a simple and effective approach for few-shot learning. The query data can be classified efficiently by finding the nearest support class based on features extracted by pretrained deep models. However, NN-based methods are sensitive to the data distribution and may produce false predictions if the samples in the support set happen to lie around the distribution boundary of different classes. To solve this issue, we present P3DC-Shot, an improved nearest-neighbor based few-shot classification method empowered by prior-driven data calibration. Inspired by the distribution calibration technique, which utilizes the distribution or statistics of the base classes to calibrate the data for few-shot tasks, we propose a novel discrete data calibration operation that is more suitable for NN-based few-shot classification. Specifically, we treat the prototypes representing each base class as priors and calibrate each support sample based on its similarity to different base prototypes. Then, we perform NN classification using these discretely calibrated support data. Results from extensive experiments on various datasets show that our efficient non-learning based method can outperform, or at least be comparable to, SOTA methods that require additional learning steps.
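The calibration step described above can be sketched directly: treat each base prototype as a prior and shift every support vector toward a similarity-weighted mixture of its most similar prototypes. A hedged NumPy sketch (the mixing weight `alpha`, the `k` neighbors, and the softmax weighting are illustrative choices, not necessarily the paper's exact formulation):

```python
import numpy as np

def calibrate_support(support, base_protos, alpha=0.7, k=2):
    """Shift each support vector toward a similarity-weighted mixture
    of its k most similar base prototypes (prior-driven calibration)."""
    sims = support @ base_protos.T            # similarity to every prototype
    calibrated = []
    for s, sim in zip(support, sims):
        top = np.argsort(sim)[-k:]            # indices of the k most similar priors
        w = np.exp(sim[top])
        w /= w.sum()                          # softmax over the k similarities
        prior = w @ base_protos[top]
        calibrated.append(alpha * s + (1.0 - alpha) * prior)
    return np.stack(calibrated)
```

NN classification then proceeds against these calibrated support vectors instead of the raw ones, which pulls boundary-straddling samples back toward plausible class regions.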